

Search for: All records

Creators/Authors contains: "Clifford, Gari D."


  1. Abstract

    Background

    The expanding use of complex machine learning methods such as deep learning has led to an explosion in human activity recognition, particularly as applied to health. However, complex models that handle private and sometimes protected data raise concerns about the potential leakage of identifiable data. In this work, we focus on the case of a deep network model trained on images of individual faces.

    Materials and methods

    A previously published deep learning model, trained to estimate gaze from full-face image sequences, was stress-tested for personal information leakage using a white-box inference attack. Full-face video recordings from 493 individuals undergoing an eye-tracking-based evaluation of neurological function were used. Outputs, gradients, intermediate-layer outputs, loss, and labels served as inputs to a deep network with an added support vector machine emission layer that recognizes membership in the training data (a minimal sketch of this style of attack follows this entry).

    Results

    The inference attack method and associated mathematical analysis indicate that there is a low likelihood of unintended memorization of facial features in the deep learning model.

    Conclusions

    This study shows that the model in question preserves the integrity of its training data with reasonable confidence. The same procedure can be applied under similar conditions to other models.

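
    The attack in this entry amounts to a binary classifier over per-example signals extracted from the target network. Below is a minimal sketch under stated assumptions, not the authors' implementation: a toy PyTorch classifier stands in for the gaze model, the attack features are reduced to loss, output confidence, and gradient norm, and a plain scikit-learn SVM replaces the deep attack network with its SVM emission layer.

        # Minimal white-box membership-inference sketch (illustrative only).
        # For each example we collect per-sample features -- loss, top softmax
        # score, and gradient norm -- and train an SVM to separate training-set
        # members from non-members.
        import torch
        import torch.nn as nn
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score

        def attack_features(model, loss_fn, x, y):
            """Per-example features: loss value, max softmax score, gradient norm."""
            model.zero_grad()
            out = model(x.unsqueeze(0))
            loss = loss_fn(out, y.unsqueeze(0))
            loss.backward()
            grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                       for p in model.parameters()
                                       if p.grad is not None))
            return [loss.item(), out.softmax(dim=1).max().item(), grad_norm.item()]

        # Toy stand-in for the target network; in the paper's setting this is a
        # gaze-estimation model, and the feature vector also carries
        # intermediate-layer outputs and labels.
        torch.manual_seed(0)
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
        loss_fn = nn.CrossEntropyLoss()

        # Hypothetical member/non-member pools with random labels; a real attack
        # would use the target model's actual training and held-out examples.
        members = [(torch.randn(16), torch.randint(0, 2, ())) for _ in range(50)]
        outsiders = [(torch.randn(16), torch.randint(0, 2, ())) for _ in range(50)]

        X = [attack_features(model, loss_fn, x, y) for x, y in members + outsiders]
        z = [1] * len(members) + [0] * len(outsiders)  # 1 = member

        X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.3,
                                                  stratify=z, random_state=0)
        attack = SVC(kernel="rbf").fit(X_tr, z_tr)
        print("attack accuracy:", accuracy_score(z_te, attack.predict(X_te)))

    On this untrained toy the attack should sit near chance, which is the desired outcome for a privacy-preserving model; accuracy well above chance would indicate memorization of the member examples.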
  2. Routine blood pressure (BP) measurement in pregnancy is commonly performed using automated oscillometric devices. Since no wireless oscillometric BP device has been validated in preeclamptic populations, a simple approach for capturing readings from such devices is needed, especially in low-resource settings where transmission of BP data from the field to central locations is an important mechanism for triage. To this end, a total of 8192 BP readings were captured from the Liquid Crystal Display (LCD) screen of a standard Omron M7 self-inflating BP cuff using a cellphone camera. A cohort of 49 lay midwives captured these data from 1697 pregnant women carrying singletons, between 6 and 40 weeks gestational age, in rural Guatemala during routine screening. Images exhibited wide variability in appearance due to variations in orientation and parallax; environmental factors such as lighting and shadows; and image-acquisition factors such as motion blur and poor focus. Images were independently labeled for readability and quality by three annotators (BP range: 34–203 mm Hg) and disagreements were resolved. Methods were developed to preprocess the LCD images and automatically segment them into diastolic BP, systolic BP, and heart rate regions using a contour-based technique (a simplified sketch of this step follows this entry). A deep convolutional neural network was then trained to convert the LCD images into numerical values using a multi-digit recognition approach. On readable low- and high-quality images, the proposed approach achieved 91% classification accuracy with a mean absolute error of 3.19 mm Hg for systolic BP, and 91% accuracy with a mean absolute error of 0.94 mm Hg for diastolic BP. These error values are within the FDA guidelines for BP monitoring when poor-quality images are excluded. The performance of the proposed approach was shown to be greatly superior to that of state-of-the-art open-source tools (Tesseract and the Google Vision API). The algorithm was developed such that it can be deployed on a phone and operate without network connectivity.
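
    As a rough illustration of the contour-based segmentation step described above, the sketch below thresholds a photograph of an LCD screen and groups the resulting contours into candidate digit boxes. It is a minimal stand-in under stated assumptions, not the published pipeline: the function name segment_lcd_digits, the threshold and morphology parameters, and the row-grouping heuristic are all illustrative, and perspective correction, quality filtering, and the CNN digit recognizer are omitted.

        # Sketch of contour-based localisation of seven-segment LCD digits.
        import cv2
        import numpy as np

        def segment_lcd_digits(image_path, min_area=100, row_height=50):
            """Return bounding boxes of candidate digit regions, row by row."""
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            # Adaptive thresholding copes with uneven lighting and shadows
            # (block size 31 and offset 10 are illustrative choices).
            binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                           cv2.THRESH_BINARY_INV, 31, 10)
            # Dilation merges the disconnected strokes of seven-segment digits.
            binary = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=2)
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            boxes = [cv2.boundingRect(c) for c in contours
                     if cv2.contourArea(c) >= min_area]
            # Sort top-to-bottom, then left-to-right; the assumed row_height
            # groups boxes belonging to the same displayed number.
            return sorted(boxes, key=lambda b: (b[1] // row_height, b[0]))

    Each (x, y, w, h) crop would then be passed to a digit classifier, and grouping boxes by row separates the systolic BP, diastolic BP, and heart-rate readouts on the display.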
  3. Abstract

    Study Objectives

    Detecting sleep–wake states with wrist-worn wearables remains a formidable challenge, particularly among individuals with disordered sleep. We developed a novel, unbiased, data-driven method for sleep–wake detection and compared its performance with that of the well-established Oakley algorithm (OA) relative to polysomnography (PSG) in elderly men with disordered sleep.

    Methods

    Overnight in-lab PSG from 102 participants was compared with accelerometry and photoplethysmography simultaneously collected with a wearable device (Empatica E4). A binary segmentation algorithm was used to detect change points in these signals, and a model that estimates sleep or wake states given those changes was established (change point decoder, CPD). The CPD’s performance was compared with that of the OA in relation to PSG (a toy illustration of binary-segmentation change-point detection follows this entry).

    Results

    On the testing set, the OA provided sleep accuracy of 0.85, wake accuracy of 0.54, AUC of 0.67, and Kappa of 0.39; comparable values for the CPD were 0.70, 0.74, 0.78, and 0.40. The CPD had a sleep-onset-latency error of −22.9 min and a sleep-efficiency error of 2.09%, and underestimated the number of sleep–wake transitions with an error of 64.4; the corresponding values for the OA were 28.6 min, −0.03%, and −17.2.

    Conclusions

    The CPD aggregates information from both cardiac and motion signals for state determination, as well as the cross-dimensional influences between these domains. It therefore achieved balanced performance and a higher AUC, despite underestimating sleep–wake transitions. The CPD could be used as an alternative framework for investigating sleep–wake dynamics within the conventional time frame of 30-s epochs.
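
    To make the change-point idea concrete, here is a toy sketch using binary segmentation from the open-source ruptures Python library on a synthetic one-channel actigraphy-like signal. Everything in it is an illustrative assumption: the published CPD fuses accelerometry and photoplethysmography and learns its state model from data, whereas this sketch decodes states with a simple mean-level rule.

        # Toy change-point decoding: binary segmentation plus a mean-level rule.
        import numpy as np
        import ruptures as rpt  # pip install ruptures

        rng = np.random.default_rng(0)
        # Synthetic 1 Hz activity trace: a quiet "sleep" block flanked by "wake".
        signal = np.concatenate([rng.normal(1.0, 0.3, 600),    # wake
                                 rng.normal(0.1, 0.05, 1800),  # sleep
                                 rng.normal(1.0, 0.3, 600)])   # wake

        # Binary segmentation recursively splits the series at the most likely
        # change points under an L2 (mean-shift) cost.
        breakpoints = rpt.Binseg(model="l2").fit(signal).predict(n_bkps=2)

        # A crude decoder: label each segment sleep or wake by its mean level
        # (the 0.5 threshold is an assumption tied to this synthetic signal).
        start = 0
        for end in breakpoints:
            state = "sleep" if signal[start:end].mean() < 0.5 else "wake"
            print(f"samples {start:5d}-{end:5d}: {state}")
            start = end

    On the synthetic trace the two detected change points recover the wake-to-sleep and sleep-to-wake transitions; the real decoder operates on multichannel wearable signals and a learned state mapping rather than a fixed threshold.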